Game theory is the study of mathematical models of strategic interactions. It has applications in many fields of social science, and is used extensively in economics, logic, systems science and computer science. Initially, game theory addressed two-person zero-sum games, in which a participant's gains or losses are exactly balanced by the losses and gains of the other participant. In the 1950s, it was extended to the study of non-zero-sum games, and was eventually applied to a wide range of human behavior. It is now an umbrella term for the science of rational decision-making in humans, animals, and computers.
Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by Theory of Games and Economic Behavior (1944), co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
Game theory was developed extensively in the 1950s, and was explicitly applied to evolution in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory in 1999, and fifteen game theorists have won the Nobel Prize in economics as of 2020, including most recently Paul Milgrom and Robert B. Wilson.
In 1838, Antoine Augustin Cournot provided a model of competition in oligopoly. Though he did not refer to it as such, he presented a solution that is the Nash equilibrium of the game in his Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth). In 1883, Joseph Bertrand critiqued Cournot's model as unrealistic, providing an alternative model of price competition which would later be formalized by Francis Ysidro Edgeworth (Edgeworth, Francis (1889), "The pure theory of monopoly", reprinted in Collected Papers Relating to Political Economy, 1925, vol. 1, Macmillan).
In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels (On an Application of Set Theory to the Theory of the Game of Chess), which proved that the optimal chess strategy is strictly determined.
In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix is symmetric, and provided a solution to a non-trivial infinite game (known in English as the Blotto game). Borel conjectured the non-existence of mixed-strategy equilibria in finite games, a conjecture that was proved false by von Neumann. In 1950, John Nash developed a criterion for mutual consistency of players' strategies known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. Nash proved that every finite n-player, non-zero-sum (not just two-player zero-sum) non-cooperative game has what is now known as a Nash equilibrium in mixed strategies.
Game theory experienced a flurry of activity in the 1950s, during which the concepts of the core, the extensive form game, fictitious play, and the Shapley value were developed. The 1950s also saw the first applications of game theory to philosophy and political science. The first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by mathematicians Merrill M. Flood and Melvin Dresher, as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy.
In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection and common knowledge were introduced and analyzed.
In 1994, John Nash was awarded the Nobel Memorial Prize in Economic Sciences for his contribution to game theory. Nash's most famous contribution to game theory is the concept of the Nash equilibrium, which is a solution concept for non-cooperative games, published in 1951. A Nash equilibrium is a set of strategies, one for each player, such that no player can improve their payoff by unilaterally changing their strategy.
In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten, and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing equilibrium coarsening and correlated equilibria, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences.
In 2007, Leonid Hurwicz, Eric Maskin, and Roger Myerson were awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory". Myerson's contributions include the notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict. Hurwicz introduced and formalized the concept of incentive compatibility.
In 2012, Alvin E. Roth and Lloyd Shapley were awarded the Nobel Prize in Economics "for the theory of stable allocations and the practice of market design". In 2014, the Nobel went to game theorist Jean Tirole.
Cooperative games are often analyzed through the framework of cooperative game theory, which focuses on predicting which coalitions will form, the joint actions that groups take, and the resulting collective payoffs. It is different from non-cooperative game theory which focuses on predicting individual players' actions and payoffs by analyzing Nash equilibria.
Cooperative game theory provides a high-level approach as it describes only the structure and payoffs of coalitions, whereas non-cooperative game theory also looks at how strategic interaction will affect the distribution of payoffs. As non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold) provided that sufficient assumptions are made to encompass all the possible strategies available to players due to the possibility of external enforcement of cooperation.
The most commonly studied asymmetric games are games where there are not identical strategy sets for both players. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric. For example, the game pictured in this section's graphic is asymmetric despite having identical strategy sets for both players.
Many games studied by game theorists (including the famed prisoner's dilemma) are non-zero-sum games, because the outcome has net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another.
Furthermore, constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any constant-sum game into a (possibly asymmetric) zero-sum game by adding a dummy player (often called "the board") whose losses compensate the players' net winnings.
The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, while extensive form is used to represent sequential ones. The transformation of extensive to normal form is one way, meaning that multiple extensive form games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection.
In short, sequential games differ from simultaneous games in that players have prior knowledge of the other players' earlier moves, and in their usual representation: extensive form and decision trees for sequential games, normal form and payoff matrices for simultaneous ones.
Many card games are games of imperfect information, such as poker and contract bridge. Perfect information is often confused with complete information, which is a similar concept pertaining to the common knowledge of each player's sequence, strategies, and payoffs throughout gameplay.
A Bayesian game is a strategic game with incomplete information. In a strategic game, decision makers are players, and every player has a set of actions. A core part of the incomplete information specification is the set of states. Every state completely describes a collection of characteristics relevant to a player, such as their preferences and other details about them. There must be a state for every set of features that some player believes may exist. For example, suppose Player 1 is unsure whether Player 2 would rather date her or get away from her, while Player 2 understands Player 1's preferences as before. Specifically, suppose Player 1 believes that Player 2 wants to date her with probability 1/2 and wants to get away from her with probability 1/2 (an assessment that probably comes from Player 1's experience: in such situations she faces players who want to date her half of the time and players who want to avoid her half of the time). Because of the probability involved, analyzing this situation requires understanding the players' preferences over random draws (lotteries), even if one is only interested in pure-strategy equilibria.
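As a minimal sketch of how such beliefs enter the analysis, the following Python snippet computes Player 1's expected payoff for each of her actions in the dating example. The venues, payoff numbers, and the conjectured strategies for each type of Player 2 are hypothetical illustrations, not part of the original example.

```python
# Hypothetical illustration of the dating example above: Player 2's "type"
# (wants to date Player 1, or wants to avoid her) is unknown to Player 1,
# who assigns probability 1/2 to each. Payoff numbers are invented.

belief = {"date": 0.5, "avoid": 0.5}   # Player 1's belief over Player 2's types

# Player 1's payoff: 1 if she ends up at the same venue as Player 2, else 0.
def p1_payoff(p1_venue, p2_venue):
    return 1 if p1_venue == p2_venue else 0

# A conjectured strategy for each type of Player 2 (an assumption, not an
# equilibrium computation): the "date" type goes to venue "A", where Player 1
# is expected to go; the "avoid" type goes to the other venue "B".
p2_strategy = {"date": "A", "avoid": "B"}

# Player 1's expected payoff for each of her actions, averaging over states.
for venue in ("A", "B"):
    expected = sum(prob * p1_payoff(venue, p2_strategy[t])
                   for t, prob in belief.items())
    print(f"Player 1 goes to {venue}: expected payoff = {expected}")
# With these numbers each venue yields 0.5, showing how beliefs over states
# enter the expected-payoff calculation in a Bayesian game.
```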
Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory. A typical game that has been solved this way is Hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies.
Research in artificial intelligence has addressed both perfect and imperfect information games that have very complex combinatorial structures (like chess, go, or backgammon) for which no provable optimal strategies have been found. The practical solutions involve computational heuristics, like alpha–beta pruning or use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice.
A particular case of differential games are the games with a random time horizon. In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval.
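As a brief sketch of why this reformulation works (notation mine, not from the original text): write the running cost as $g(x(t),u(t))$ and let $F$ be the cumulative distribution function of the random terminal time $T$. Exchanging expectation and integration gives

```latex
\mathbb{E}\!\left[\int_{0}^{T} g\bigl(x(t),u(t)\bigr)\,dt\right]
  \;=\; \int_{0}^{\infty} \bigl(1 - F(t)\bigr)\, g\bigl(x(t),u(t)\bigr)\,dt ,
```

and if, in addition, $T$ is assumed to be exponentially distributed with rate $\lambda$, then $1-F(t)=e^{-\lambda t}$ and the right-hand side is exactly the cost functional of a discounted differential game on an infinite time interval.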
In biology, such models can represent evolution, in which offspring adopt their parents' strategies and parents who play more successful strategies (i.e. corresponding to higher payoffs) have a greater number of offspring. In the social sciences, such models typically represent strategic adjustment by players who play a game many times within their lifetime and, consciously or unconsciously, occasionally adjust their strategies.
Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves" ("moves by nature"). This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game.
For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the Minimax is that the latter considers the worst-case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also be overestimating extremely unlikely (but costly) events, dramatically swaying the strategy in such scenarios if it is assumed that an adversary can force such an event to happen. (See Black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.)
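The following toy example (payoff numbers invented) illustrates the contrast: under a fixed probability model the risky action looks best in expectation, while the minimax criterion rejects it because of the rare but catastrophic outcome.

```python
# Rows are our actions; columns are the other side's possible moves.
payoffs = {
    "cautious":   [3, 3, 2],    # similar payoff whatever the opponent does
    "aggressive": [9, 8, -50],  # great unless the rare worst case occurs
}
opponent_probs = [0.6, 0.39, 0.01]   # an assumed stochastic model of the opponent

def expected_value(row):
    return sum(p * v for p, v in zip(opponent_probs, row))

def worst_case(row):
    return min(row)

# MDP-style reasoning: maximize expectation under the assumed distribution.
best_expectation = max(payoffs, key=lambda a: expected_value(payoffs[a]))
# Minimax reasoning: maximize the worst-case payoff over adversarial moves.
best_worst_case = max(payoffs, key=lambda a: worst_case(payoffs[a]))

print(best_expectation)  # "aggressive": the unlikely disaster barely dents the mean
print(best_worst_case)   # "cautious": the disaster dominates if an adversary can force it
```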
General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.
The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard, whereby a situation is framed as a strategic game in which stakeholders try to realize their objectives by means of the options available to them. Subsequent developments have led to the formulation of confrontation analysis.
Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.
The game pictured consists of two players. The way this particular game is structured (i.e., with sequential decision making and perfect information), Player 1 "moves" first by choosing either a fair or an unfair split. Next in the sequence, Player 2, who has now observed Player 1's move, can choose to either accept or reject the split. Once Player 2 has made their choice, the game is considered finished and each player gets their respective payoff, represented in the image as two numbers, where the first number represents Player 1's payoff, and the second number represents Player 2's payoff. Suppose that Player 1 chooses the unfair split and Player 2 then chooses to accept: Player 1 then gets a payoff of "eight" (which in real-world terms can be interpreted in many ways, the simplest of which is in terms of money but could mean things such as eight days of vacation or eight countries conquered or even eight more opportunities to play the same game against other players) and Player 2 gets a payoff of "two".
The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e. the players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect information section.)
When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form.
Every extensive-form game has an equivalent normal-form game, however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical.
Formally, a characteristic function is a function $v : 2^N \to \mathbb{R}$ (where $2^N$ denotes the power set of the player set $N$) from the set of all possible coalitions of players to a set of payments, and it also satisfies $v(\varnothing) = 0$. The function describes how much collective payoff a set of players can gain by forming a coalition.
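As a minimal sketch of this structure, the following Python snippet encodes a hypothetical three-player characteristic function and computes the Shapley value mentioned earlier; the coalition payoffs are invented for illustration.

```python
from itertools import permutations

# A characteristic function for a 3-player cooperative game: each coalition
# maps to a payoff, and the empty coalition maps to 0, as required above.
players = ("A", "B", "C")
v = {
    frozenset(): 0,
    frozenset("A"): 1, frozenset("B"): 1, frozenset("C"): 2,
    frozenset("AB"): 4, frozenset("AC"): 5, frozenset("BC"): 5,
    frozenset("ABC"): 9,
}

def shapley_value(player):
    """Average marginal contribution of `player` over all join orders."""
    total, n_orders = 0.0, 0
    for order in permutations(players):
        idx = order.index(player)
        before = frozenset(order[:idx])
        total += v[before | {player}] - v[before]
        n_orders += 1
    return total / n_orders

for p in players:
    print(p, shapley_value(p))
# The three values sum to v({A,B,C}) = 9, illustrating how the characteristic
# function alone determines a division of the grand coalition's payoff.
```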
| Name | Year | Means | Type of games | Time |
|---|---|---|---|---|
| Congestion game | 1973 | functions | subset of n-person games, simultaneous moves | No |
| Sequential form | 1994 | matrices | 2-person games of imperfect information | No |
| Timed games | 1994 | functions | 2-person games | Yes |
| Gala | 1997 | logic | n-person games of imperfect information | No |
| Graphical games | 2001 | graphs, functions | n-person games, simultaneous moves | No |
| Local effect games | 2003 | functions | subset of n-person games, simultaneous moves | No |
| GDL | 2005 | logic | deterministic n-person games, simultaneous moves | No |
| Game Petri-nets | 2006 | Petri net | deterministic n-person games, simultaneous moves | No |
| Continuous games | 2007 | functions | subset of 2-person games of imperfect information | Yes |
| PNSI | 2008 | Petri net | n-person games of imperfect information | Yes |
| Action graph games | 2012 | graphs, functions | n-person games, simultaneous moves | No |
Although pre-twentieth-century naturalists such as Charles Darwin made game-theoretic kinds of statements, the use of game-theoretic analysis in biology began with Ronald Fisher's studies of animal behavior during the 1930s. This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his 1982 book Evolution and the Theory of Games.
In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior. In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic approaches have also been suggested in the philosophy of language and philosophy of science. Game-theoretic arguments of this type can be found as far back as Plato. An alternative version of game theory, called chemical game theory, represents the player's choices as metaphorical chemical reactant molecules called "knowlecules". Chemical game theory then calculates the outcomes as equilibrium solutions to a system of chemical reactions.
Some game theorists, following the work of John Maynard Smith and George R. Price, have turned to evolutionary game theory in order to resolve these issues. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).
This research usually focuses on particular sets of strategies known as solution concepts. A common assumption is that players act rationally. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. If all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.
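A minimal sketch of checking this condition in a small two-player game (the strategy names and payoffs are invented, forming a stag-hunt-like coordination game):

```python
import itertools

# Bimatrix game: payoffs[(row, col)] = (row player's payoff, column player's payoff).
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}
strategies = ("stag", "hare")

def is_nash(r, c):
    # Nash condition: no profitable unilateral deviation for either player.
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in strategies)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in strategies)
    return row_ok and col_ok

equilibria = [profile for profile in itertools.product(strategies, strategies)
              if is_nash(*profile)]
print(equilibria)   # [('stag', 'stag'), ('hare', 'hare')] — two pure-strategy equilibria
```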
The payoffs of the game are generally taken to represent the utility of individual players.
A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of a particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Economists and business professors suggest two primary uses (noted above): descriptive and prescriptive.
Piraveenan (2019) in his review provides several examples where game theory is used to model project management scenarios. For instance, an investor typically has several investment options, and each option will likely result in a different project, and thus one of the investment options has to be chosen before the project charter can be produced. Similarly, any large project involving subcontractors, for instance, a construction project, has a complex interplay between the main contractor (the project manager) and subcontractors, or among the subcontractors themselves, which typically has several decision points. For example, if there is an ambiguity in the contract between the contractor and subcontractor, each must decide how hard to push their case without jeopardizing the whole project, and thus their own stake in it. Similarly, when projects from competing organizations are launched, the marketing personnel have to decide what is the best timing and strategy to market the project, or its resultant product or service, so that it can gain maximum traction in the face of competition. In each of these scenarios, the required decisions depend on the decisions of other players who, in some way, have competing interests to the interests of the decision-maker, and thus can ideally be modeled using game theory.
Piraveenan summarizes that two-player games are predominantly used to model project management scenarios, and based on the identity of these players, five distinct types of games are used in project management.
In terms of types of games, both cooperative as well as non-cooperative, normal-form as well as extensive-form, and zero-sum as well as non-zero-sum are used to model various project management scenarios.
Early examples of game theory applied to political science are provided by Anthony Downs. In his 1957 book An Economic Theory of Democracy, he applies the Hotelling firm location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. Downs first shows how the political candidates will converge to the ideology preferred by the median voter if voters are fully informed, but then argues that voters choose to remain rationally ignorant which allows for candidate divergence. Game theory was applied in 1962 to the Cuban Missile Crisis during the presidency of John F. Kennedy.
It has also been proposed that game theory explains the stability of any form of political government. Taking the simplest case of a monarchy, for example, the king, being only one person, does not and cannot maintain his authority by personally exercising physical control over all or even any significant number of his subjects. Sovereign control is instead explained by the recognition by each citizen that all other citizens expect each other to view the king (or other established government) as the person whose orders will be followed. Coordinating communication among citizens to replace the sovereign is effectively barred, since conspiracy to replace the sovereign is generally punishable as a crime. Thus, in a process that can be modeled by variants of the prisoner's dilemma, during periods of stability no citizen will find it rational to move to replace the sovereign, even if all the citizens know they would be better off if they were all to act collectively.
A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and if promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy.
However, game theory predicts that two countries may still go to war even if their leaders are cognizant of the costs of fighting. War may result from asymmetric information; two countries may have incentives to mis-represent the amount of military resources they have on hand, rendering them unable to settle disputes agreeably without resorting to fighting. Moreover, war may arise because of commitment problems: if two countries wish to settle a dispute via peaceful means, but each wishes to go back on the terms of that settlement, they may have no choice but to resort to warfare. Finally, war may result from issue indivisibilities.
Game theory could also help predict a nation's responses when there is a new rule or law to be applied to that nation. One example is Peter John Wood's (2013) research looking into what nations could do to help reduce climate change. Wood thought this could be accomplished by making treaties with other nations to reduce greenhouse gas emissions. However, he concluded that this idea could not work because it would create a prisoner's dilemma for the nations.
Such tools, for example, automate the transformation of public vulnerability data into game models, allowing defenders to synthesize optimal defence strategies through Stackelberg equilibrium analysis. This approach enhances cyber resilience by enabling defenders to anticipate and counteract attackers’ best responses, making game theory increasingly relevant in adversarial cybersecurity environments.
Ho et al. provide a broad summary of game theory applications in defence, highlighting its advantages and limitations across both physical and cyber domains.
In biology, game theory has been used as a model to understand many different phenomena. It was first used to explain the evolution (and stability) of the approximate 1:1 sex ratio. Ronald Fisher (1930) suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren.
Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication. The analysis of signaling games and cheap talk has provided insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion (see Paul Ormerod's Butterfly Economics).
Biologists have used the game of chicken to analyze fighting behavior and territoriality.
According to Maynard Smith, in the preface to Evolution and the Theory of Games, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.
One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to animals that warn group members of a predator's approach even when doing so endangers that individual's chance of survival. All of these actions increase the overall fitness of a group, but occur at a cost to the individual.
Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary rationale behind this selection with the inequality $c < rb$, where the cost $c$ to the altruist must be less than the benefit $b$ to the recipient multiplied by the coefficient of relatedness $r$. The more closely related two organisms are, the higher the incidence of altruism, because they share many of the same alleles. This means that the altruistic individual, by ensuring that the alleles of its close relative are passed on through survival of its offspring, can forgo the option of having offspring itself because the same number of alleles are passed on. For example, helping a sibling (in diploid animals) has a coefficient of 1/2, because (on average) an individual shares half of the alleles in its sibling's offspring. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring. The coefficient values depend heavily on the scope of the playing field; for example, if the choice of whom to favor includes all genetic living things, not just all relatives, and we assume the discrepancy between all humans only accounts for approximately 1% of the diversity in the playing field, a coefficient that was 1/2 in the smaller field becomes 0.995. Similarly, if information other than that of a genetic nature (e.g. epigenetics, religion, science, etc.) is considered to persist through time, the playing field becomes larger still, and the discrepancies smaller.
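Stated compactly (using the notation above), the rule and its implication for full siblings are:

```latex
c \;<\; r\,b ,
\qquad\text{and for full siblings in a diploid species } (r = \tfrac12):\qquad
c < \tfrac12\, b \;\Longleftrightarrow\; b > 2c ,
```

i.e. helping a sibling is favoured whenever the benefit to the sibling is more than twice the cost to the altruist.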
Separately, game theory has played a role in online algorithms; in particular, the k-server problem, which has in the past been referred to as games with moving costs and request-answer games. Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, especially online algorithms.
The emergence of the Internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory and within it algorithmic mechanism design combine the design and analysis of computational algorithms with economic theory.
Game theory has multiple applications in the fields of artificial intelligence and machine learning. It is often used in developing autonomous systems that can make complex decisions in uncertain environments. Other areas where game theory is applied in the AI/ML context include multi-agent system formation and reinforcement learning.
Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from the interactions of agents. Philosophers who have worked in this area include Bicchieri (1989, 1993), Brian Skyrms (1990), and Robert Stalnaker (1999).
The synthesis of game theory with ethics was championed by R. B. Braithwaite.
In ethics, some authors (most notably David Gauthier, Gregory Kavka, and Jean Hampton) have attempted to pursue Thomas Hobbes' project of deriving morality from self-interest. Since games like the prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy.
Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games, including the prisoner's dilemma, stag hunt, and the Nash bargaining game, as providing an explanation for the emergence of attitudes about morality.
Standard prisoner's dilemma payoff matrix (rows: A stays silent / A betrays; columns: B stays silent / B betrays).
Two members of a criminal gang, A and B, are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communication with their partner. The principal charge would lead to a sentence of ten years in prison; however, the police do not have the evidence for a conviction. They plan to sentence both to two years in prison on a lesser charge but offer each prisoner a Faustian bargain: If one of them confesses to the crime of the principal charge, betraying the other, they will be pardoned and free to leave while the other must serve the entirety of the sentence instead of just two years for the lesser charge.
The dominant strategy (and therefore the best response to any possible opponent strategy) is to betray the other, which aligns with the sure-thing principle.
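The story above can be written as a payoff matrix and checked mechanically. In this sketch, prison sentences are expressed as negative payoffs; the sentence when both prisoners betray each other is not stated above, so the value of five years used here is an assumption for illustration.

```python
# Payoff matrix for the story above (years in prison as negative payoffs).
payoffs = {
    ("silent", "silent"): (-2,  -2),   # both take the lesser charge
    ("silent", "betray"): (-10,  0),   # A serves the full sentence, B goes free
    ("betray", "silent"): ( 0, -10),
    ("betray", "betray"): (-5,  -5),   # assumed value, not stated in the text
}
actions = ("silent", "betray")

def dominant_for_A(action):
    """True if `action` is at least as good for A against every move of B."""
    return all(payoffs[(action, b)][0] >= payoffs[(other, b)][0]
               for b in actions for other in actions)

print([a for a in actions if dominant_for_A(a)])   # ['betray']
```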
An example of the "battle of the sexes" can be seen in the portrayal of relationships in popular media, where men and women are often depicted as being fundamentally different and in conflict with each other. For instance, in some romantic comedies, the male and female protagonists are shown as having opposing views on love and relationships, and they have to overcome these differences in order to be together.
In this game, there are two pure-strategy Nash equilibria: one where both players choose the same strategy and one where they choose different options. If the game is also analyzed in mixed strategies, where each player chooses their strategy randomly, there is a third Nash equilibrium in which both players randomize. However, in the context of the "battle of the sexes" game, the assumption is usually made that the game is played in pure strategies.
One player, the proposer, is endowed with a sum of money. The proposer is tasked with splitting it with another player, the responder (who knows what the total sum is). Once the proposer communicates their decision, the responder may accept it or reject it. If the responder accepts, the money is split per the proposal; if the responder rejects, both players receive nothing. Both players know in advance the consequences of the responder accepting or rejecting the offer. The game demonstrates how social acceptance, fairness, and generosity influence the players' decisions.
The ultimatum game has a variant, the dictator game. The two are mostly identical, except that in the dictator game the responder has no power to reject the proposer's offer.
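As a rough illustration of the standard game-theoretic prediction for the ultimatum game under pure self-interest, the following backward-induction sketch splits 10 units in whole-unit offers; the total and the unit size are assumptions made for illustration.

```python
# Backward induction for the ultimatum game with a self-interested responder.
TOTAL = 10

def responder_accepts(offer):
    # A self-interested responder prefers any positive amount to nothing.
    return offer > 0

def proposer_payoff(offer):
    return TOTAL - offer if responder_accepts(offer) else 0

best_offer = max(range(TOTAL + 1), key=proposer_payoff)
print(best_offer, proposer_payoff(best_offer))   # 1 9: offer the minimum, keep the rest
```

The same logic applied to the dictator game predicts an offer of zero, since rejection is impossible; as the experimental literature discussed here shows, observed behaviour in both games typically departs from these predictions.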
In the trust game, one player (the investor) is given a sum of money and must decide how much of it to give to another player (the trustee). The amount given is then tripled by the experimenter. The trustee then decides how much of the tripled amount to return to the investor. If the trustee were completely self-interested, they would return nothing. However, this is not what happens when the experiment is conducted. The outcomes suggest that people are willing to place trust, by risking some amount of money, in the belief that there will be reciprocity.